Approximation of functions by perceptron networks with bounded number of hidden units
Abstract
We examine the effect of constraining the number of hidden units. For one-hidden-layer networks with a fairly general type of units, including perceptrons with any bounded activation function and radial-basis-function units, we show that when the size of the parameters is also bounded, the best approximation property is satisfied, which means that there always exists a parameterization achieving the global minimum of any error function generated by a supremum or Lp norm. We also show that the only functions that can be approximated with arbitrary accuracy by increasing parameters in networks with a fixed number of Heaviside perceptrons are functions equal almost everywhere to functions that can be exactly computed by such networks. We give a necessary condition on the values that such piecewise constant functions must achieve.
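For concreteness, a network of the kind studied here with n Heaviside perceptron units computes (in standard notation, not taken from the paper itself)

\[
f(x) \;=\; \sum_{i=1}^{n} w_i\,\vartheta(v_i \cdot x + b_i),
\qquad
\vartheta(t) \;=\;
\begin{cases}
1, & t \ge 0,\\
0, & t < 0,
\end{cases}
\]

with output weights \(w_i \in \mathbb{R}\), input weights \(v_i \in \mathbb{R}^d\), and biases \(b_i \in \mathbb{R}\); "bounded size of parameters" means these range over a bounded set. Every such f is piecewise constant on the cells cut out by the n hyperplanes \(v_i \cdot x + b_i = 0\), which is why the closure result above concerns piecewise constant functions.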
Similar articles
Best approximation by Heaviside perceptron networks
In Lp-spaces with p an integer from [1, infinity) there exists a best approximation mapping to the set of functions computable by Heaviside perceptron networks with n hidden units; however, for p an integer from (1, infinity), such a best approximation is not unique and cannot be continuous.
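In symbols (my formulation), with \(Y_n\) denoting the set of functions computable by such networks with n hidden units, a best approximation mapping is a map \(\Phi : L^p \to Y_n\) satisfying, for every f,

\[
\|f - \Phi(f)\|_p \;=\; \inf_{g \in Y_n} \|f - g\|_p ,
\]

i.e., \(\Phi(f)\) attains the distance from f to \(Y_n\); the result says such a \(\Phi\) exists, but for the stated p > 1 no choice of \(\Phi\) is continuous.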
An Integral Upper Bound for Neural Network Approximation
Complexity of one-hidden-layer networks is studied using tools from nonlinear approximation and integration theory. For functions with suitable integral representations in the form of networks with infinitely many hidden units, upper bounds are derived on the speed of decrease of approximation error as the number of network units increases. These bounds are obtained for various norms using the ...
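The prototype of such results is the Maurey–Jones–Barron estimate (my paraphrase; the paper's bounds refine this for various norms): if f has variation \(\|f\|_G\) with respect to a family G of hidden-unit functions bounded in norm by \(s_G\), then in a Hilbert space

\[
\bigl\| f - \operatorname{span}_n G \bigr\| \;\le\; \frac{s_G \,\|f\|_G}{\sqrt{n}} ,
\]

where \(\operatorname{span}_n G\) denotes the set of linear combinations of at most n elements of G.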
Rates of approximation of real-valued boolean functions by neural networks
We give upper bounds on rates of approximation of real-valued functions of d Boolean variables by one-hidden-layer perceptron networks. Our bounds are of the form c/n where c depends on certain norms of the function being approximated and n is the number of hidden units. We describe sets of functions where these norms grow either polynomially or exponentially with d.
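Written out (notation mine): for \(f : \{0,1\}^d \to \mathbb{R}\) and a network \(f_n\) with n hidden perceptrons, the bounds have the shape

\[
\|f - f_n\| \;\le\; \frac{c(f)}{n} ,
\]

where the constant \(c(f)\) involves norms of f that, depending on the function class, grow polynomially or exponentially with d.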
Approximation with neural networks activated by ramp sigmoids
Accurate and parsimonious approximations for indicator functions of d-dimensional balls and related functions are given using level sets associated with the thresholding of a linear combination of ramp sigmoid activation functions. In neural network terminology, we are using a single-hidden-layer perceptron network implementing the ramp sigmoid activation function to approximate the indicator o...
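A ramp sigmoid is the piecewise-linear sigmoid (one common normalization; the paper may use a scaled variant):

\[
\rho(t) \;=\; \min\{1, \max\{0, t\}\} \;=\;
\begin{cases}
0, & t \le 0,\\
t, & 0 < t < 1,\\
1, & t \ge 1,
\end{cases}
\]

and thresholding a linear combination of such units produces the level sets mentioned above.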
Bounds on Sparsity of One-Hidden-Layer Perceptron Networks
Limitations of one-hidden-layer (shallow) perceptron networks in sparsely representing multivariable functions are investigated. A concrete class of functions is described whose computation by shallow perceptron networks requires either a large number of units or is unstable due to large output weights. The class is constructed using pseudo-noise sequences, which have many features of random sequences...
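A mechanism behind such tradeoffs (my sketch, using the G-variation norm standard in this literature): any representation of f by n units from a family G satisfies

\[
f \;=\; \sum_{i=1}^{n} w_i g_i,\quad g_i \in G
\quad\Longrightarrow\quad
\sum_{i=1}^{n} |w_i| \;\ge\; \|f\|_G ,
\]

so a function with large variation with respect to Heaviside perceptrons forces either many hidden units or large output weights.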
Journal: Neural Networks
Volume: 8, Issue: -
Pages: -
Publication date: 1995